5 research outputs found

    Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model

    Full text link
    Hyperspectral images (HSI) carry a large amount of spectral information reflecting the characteristics of matter, but their spatial resolution is low due to the limitations of imaging technology. Complementary to this are multispectral images (MSI), e.g., RGB images, with high spatial resolution but insufficient spectral bands. Hyperspectral and multispectral image fusion is a technique for cost-effectively acquiring images that have both high spatial and high spectral resolution. Many existing HSI and MSI fusion algorithms rely on known imaging degradation models, which are often not available in practice. In this paper, we propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus. Specifically, DDPM-Fus comprises a forward diffusion process, which gradually adds Gaussian noise to the high spatial resolution HSI (HrHSI), and a reverse denoising process, which learns to predict the desired HrHSI from its noisy version conditioned on the corresponding high spatial resolution MSI (HrMSI) and low spatial resolution HSI (LrHSI). Once training is complete, DDPM-Fus runs the reverse process on the test HrMSI and LrHSI to generate the fused HrHSI. Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model when compared with other advanced deep learning-based fusion methods. The code of this work will be open-sourced at https://github.com/shuaikaishi/DDPMFus for reproducibility.
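
    As a rough illustration of the conditioning scheme described above, here is a minimal PyTorch sketch of one DDPM training step: Gaussian noise is added to the HrHSI at a random timestep, and a denoiser conditioned on the (upsampled) LrHSI and the HrMSI learns to predict that noise. All names and sizes (FusionDenoiser, the noise schedule, the band counts) are illustrative assumptions, not the authors' DDPM-Fus implementation.

```python
# Illustrative sketch only; hypothetical names, not the DDPM-Fus codebase.
import torch
import torch.nn as nn


class FusionDenoiser(nn.Module):
    """Toy noise predictor conditioned on the upsampled LrHSI and the HrMSI."""

    def __init__(self, hsi_bands=31, msi_bands=3, hidden=64):
        super().__init__()
        in_ch = 2 * hsi_bands + msi_bands  # noisy HrHSI + upsampled LrHSI + HrMSI
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hsi_bands, 3, padding=1),
        )

    def forward(self, x_t, lr_hsi_up, hr_msi, t):
        # A real denoiser would also embed the timestep t; omitted for brevity.
        return self.net(torch.cat([x_t, lr_hsi_up, hr_msi], dim=1))


T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction


def training_step(model, hr_hsi, lr_hsi_up, hr_msi):
    """Forward-diffuse the HrHSI to a random timestep, predict the noise."""
    b = hr_hsi.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(hr_hsi)
    a_bar = alphas_bar[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * hr_hsi + (1.0 - a_bar).sqrt() * noise
    pred = model(x_t, lr_hsi_up, hr_msi, t)
    return nn.functional.mse_loss(pred, noise)
```

    At test time, the learned denoiser would be applied iteratively starting from pure noise, conditioned on the test LrHSI and HrMSI, to sample the fused HrHSI.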

    Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency

    Full text link
    Hyperspectral images (HSI), whose abundant spectral information reflects material properties, usually have low spatial resolution due to hardware limits. Meanwhile, multispectral images (MSI), e.g., RGB images, have high spatial resolution but deficient spectral signatures. Hyperspectral and multispectral image fusion is a cost-effective and efficient way to acquire images with both high spatial and high spectral resolution. Many conventional HSI and MSI fusion algorithms rely on known spatial degradation parameters, i.e., the point spread function (PSF), known spectral degradation parameters, i.e., the spectral response function (SRF), or both. Another class of deep learning-based models relies on the ground truth of high spatial resolution HSI and needs large amounts of paired training images when working in a supervised manner. Both kinds of models are limited in practical fusion scenarios. In this paper, we propose an unsupervised HSI and MSI fusion model based on cycle consistency, called CycFusion. CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI), and the desired high spatial resolution HSI (HrHSI) is treated as an intermediate feature map in the transformation networks. CycFusion can be trained with objective functions of marginal matching in single transforms and cycle consistency in double transforms. Moreover, the estimated PSF and SRF are embedded in the model as pre-training weights, which further enhances the practicality of our proposed model. Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods. The code of this paper will be available at https://github.com/shuaikaishi/CycFusion for reproducibility.
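
    The training objective described above can be sketched as follows, under assumed module names, band counts, and a crude average-pooling stand-in for the PSF blur plus downsampling; this is not the authors' CycFusion implementation. Two networks map between the LrHSI and HrMSI domains through an HrHSI intermediate, and the loss combines marginal matching (single transforms) with cycle consistency (double transforms).

```python
# Hedged sketch of the cycle-consistency objective; hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LrToHr(nn.Module):
    """LrHSI -> HrHSI: spatial upsampling; the HrHSI is the intermediate."""

    def __init__(self, bands=31, scale=4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Conv2d(bands, bands, 3, padding=1)

    def forward(self, lr_hsi):
        up = F.interpolate(lr_hsi, scale_factor=self.scale, mode="bilinear")
        return self.refine(up)


class MsiToHr(nn.Module):
    """HrMSI -> HrHSI: spectral super-resolution."""

    def __init__(self, msi_bands=3, hsi_bands=31):
        super().__init__()
        self.net = nn.Conv2d(msi_bands, hsi_bands, 3, padding=1)

    def forward(self, hr_msi):
        return self.net(hr_msi)


# Degradations close each transform: a 1x1 conv over bands plays the SRF
# (HrHSI -> HrMSI) and can be initialized from an estimated SRF; average
# pooling stands in for the PSF blur plus downsampling (HrHSI -> LrHSI).
srf = nn.Conv2d(31, 3, kernel_size=1, bias=False)


def psf_downsample(hr_hsi, scale=4):
    return F.avg_pool2d(hr_hsi, kernel_size=scale)


def cycfusion_loss(lr2hr, msi2hr, lr_hsi, hr_msi):
    # Marginal matching: a single transform should land in the other domain.
    msi_est = srf(lr2hr(lr_hsi))             # LrHSI -> HrHSI -> HrMSI
    lr_est = psf_downsample(msi2hr(hr_msi))  # HrMSI -> HrHSI -> LrHSI
    loss_marg = F.l1_loss(msi_est, hr_msi) + F.l1_loss(lr_est, lr_hsi)
    # Cycle consistency: a double transform should return to the start.
    lr_cyc = psf_downsample(msi2hr(msi_est))
    msi_cyc = srf(lr2hr(lr_est))
    loss_cyc = F.l1_loss(lr_cyc, lr_hsi) + F.l1_loss(msi_cyc, hr_msi)
    return loss_marg + loss_cyc
```

    After training, either lr2hr(lr_hsi) or msi2hr(hr_msi) yields an estimate of the fused HrHSI, since both transforms pass through the HrHSI domain.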

    A 3D-CNN Framework for Hyperspectral Unmixing with Spectral Variability

    No full text
    Hyperspectral unmixing plays an important role in hyperspectral image processing and analysis. It aims to decompose mixed pixels into pure spectral signatures (endmembers) and their associated abundances. A hyperspectral image contains spatial information in neighborhood regions, and the spectral signatures within a region are also highly correlated. However, most autoencoder (AE) based unmixing methods work pixel-to-pixel and ignore these priors, so it is helpful to incorporate spectral-spatial information into unmixing. A recent trend for addressing this problem is to use convolutional neural networks (CNNs). Our proposed framework uses 3D-CNN based networks to jointly learn spectral-spatial priors. Moreover, previous AE-based unmixing methods use a fixed spectral signature for each pure material; in our work, we use a carefully designed decoder to cope with endmember variability, and a variational inference strategy is applied to model the uncertainty of the endmembers. To avoid over-fitting, we apply structured sparsity regularizers to the encoder networks, and an ℓ2,1-loss is added to the estimated abundances to guarantee their sparseness. Experimental results on both simulated and real data demonstrate the effectiveness of our proposed method.
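
    A minimal sketch of the spectral-spatial idea, assuming PyTorch and illustrative layer sizes: a 3D-CNN encoder maps a hyperspectral patch to per-pixel abundances, a linear-mixing decoder with a learnable endmember matrix reconstructs the spectra, and one common form of the ℓ2,1 penalty encourages abundance sparsity. The variational treatment of endmember variability is omitted, and none of this is the authors' code.

```python
# Illustrative sketch only; hypothetical names and layer sizes.
import torch
import torch.nn as nn


class Unmix3DCNN(nn.Module):
    def __init__(self, bands=198, n_endmembers=4):
        super().__init__()
        # Encoder: 3D convolutions over (band, height, width) learn
        # spectral and spatial features jointly.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
        )
        self.to_abund = nn.Conv2d(bands, n_endmembers, 1)
        # Decoder: linear mixing with a learnable endmember matrix. (The
        # paper instead models endmember variability variationally.)
        self.endmembers = nn.Parameter(torch.rand(n_endmembers, bands))

    def forward(self, patch):                   # patch: (B, 1, bands, H, W)
        feats = self.encoder(patch).squeeze(1)  # (B, bands, H, W)
        # Softmax enforces abundance nonnegativity and sum-to-one.
        abund = torch.softmax(self.to_abund(feats), dim=1)
        recon = torch.einsum("bkhw,kc->bchw", abund, self.endmembers)
        return recon, abund


def l21_sparsity(abund):
    """One form of the l2,1 penalty: l2 over pixels per endmember, summed."""
    return abund.flatten(2).norm(dim=2).sum()
```

    Training would then minimize the reconstruction error plus a weighted l21_sparsity(abund) term, with the structured sparsity regularizers applied to the encoder weights.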

    Deep Generative Model for Spatial-spectral Unmixing with Multiple Endmember Priors

    No full text

    Probabilistic Generative Model for Hyperspectral Unmixing Accounting for Endmember Variability

    No full text